Abstract:
The authors present a new training algorithm to be used on a four-layer perceptron-type feedforward neural network for the generation of binary-to-binary mappings. This algorithm is called the Boolean-like training algorithm (BLTA) and is derived from original principles of Boolean algebra followed by selected extensions. The algorithm can be implemented on analog hardware, using a four-layer binary feedforward neural network (BFNN). The BLTA does not constitute a traditional circuit building technique. Indeed, the rules which govern the BLTA allow for generalization of data in the face of incompletely specified Boolean functions. When compared with techniques which employ descent methods, training times are greatly reduced in the case of the BLTA. Also, when the BFNN is used in conjunction with A/D converters, the applicability of the present algorithm can be extended to accept real-valued inputs.
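As a hedged illustration of the kind of binary-to-binary mapping such a network computes, the sketch below realizes XOR with hard-limiting threshold units. The weights are hand-picked for this example; they are not produced by the BLTA, and the two-hidden-unit layout is an assumption made only for illustration.

```python
import numpy as np

def step(x):
    """Hard-limiting threshold neuron: output 1 if net input >= 0, else 0."""
    return (np.asarray(x) >= 0).astype(int)

# Hand-picked weights realizing XOR with threshold units -- a hypothetical
# illustration of a binary-to-binary mapping, not weights trained by the BLTA.
W1 = np.array([[ 1.0,  1.0],    # hidden unit 1 acts like OR
               [-1.0, -1.0]])   # hidden unit 2 acts like NAND
b1 = np.array([-0.5, 1.5])
W2 = np.array([1.0, 1.0])       # output unit acts like AND of the two
b2 = -1.5

def forward(x):
    h = step(W1 @ np.asarray(x, dtype=float) + b1)  # hidden threshold layer
    return int(step(W2 @ h + b2))
```

For instance, `forward([0, 1])` and `forward([1, 0])` return 1, while `forward([0, 0])` and `forward([1, 1])` return 0.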
Abstract:
An investigation was conducted of the qualitative properties of a class of neural networks described by a system of first-order ordinary differential equations with discontinuous right hand side. An efficient synthesis procedure is developed for this class of neural networks. The class of systems considered may be used as a representation of the analog Hopfield model with the nonlinearities having infinite gain. Also, under appropriate assumptions, the output of the class of systems considered may be viewed as representing the behavior of the discrete Hopfield model. Thus the results give insight into the qualitative behavior of the analog as well as the discrete Hopfield models, and they provide a means of designing such models. The applicability of the present results is demonstrated by several specific examples.
Abstract:
An investigation was conducted of the qualitative properties of a class of neural networks described by a system of first-order linear ordinary differential equations which are defined on a closed hypercube of the state space with solutions extended to the boundary of the hypercube. When solutions are located on the boundary of the hypercube, the system is said to be in a saturated mode. The class of systems considered retains the basic structure of the Hopfield model but is easier to analyze, synthesize, and implement. An efficient analysis method is developed which can be used to determine completely the set of asymptotically stable equilibrium points and the set of unstable equilibrium points. The latter set can be used to estimate the domains of attraction for the elements of the former set. The class of systems considered can easily be implemented in analog integrated circuits. The applicability of the results is demonstrated by means of several examples.
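The saturated-mode behavior described here can be sketched numerically: integrate a linear system while clipping the state to the hypercube, so trajectories that reach the boundary stay there. The matrix below is a hypothetical choice made only to exhibit saturation; it is not a network from the paper, and this is a simulation sketch rather than the authors' analysis or synthesis method.

```python
import numpy as np

# Hypothetical system matrix: the origin is an unstable equilibrium, so
# trajectories are driven outward until they saturate on the boundary
# of the hypercube [-1, 1]^2.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def simulate(x0, dt=0.01, steps=2000):
    """Forward-Euler integration of x' = A x, clipped to the hypercube."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = np.clip(x + dt * (A @ x), -1.0, 1.0)  # saturated mode on the boundary
    return x
```

Starting from (0.1, -0.2), the trajectory is driven to the vertex (1, -1), an asymptotically stable equilibrium of the saturated system.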
Abstract:
In the present paper we investigate several types of Lyapunov stability of an equilibrium x_e of a family of finite dimensional dynamical systems determined by ordinary differential (difference) equations. By utilizing the extreme systems of the family of systems, we establish sufficient conditions, as well as necessary conditions (converse theorems) for several robust stability types. Our results enable us to realize a significant reduction in the computational complexity of the algorithm of Brayton and Tong (1979) in the construction of computer generated Lyapunov functions. Furthermore, we demonstrate the applicability of the present results by analyzing robust stability properties of equilibria for Hopfield neural networks and by analyzing the Hurwitz and Schur stability of interval matrices.
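A minimal sketch of the vertex test underlying such interval-matrix analyses: enumerate the extreme ("vertex") matrices of an entrywise interval family and check each for Hurwitz stability. The interval bounds below are hypothetical; note that in general the vertex test is necessary but not by itself sufficient for stability of the whole family, which is precisely why extreme-system results like those in the abstract are of interest.

```python
import numpy as np
from itertools import product

def is_hurwitz(M):
    """True if every eigenvalue of M lies in the open left half-plane."""
    return float(np.max(np.linalg.eigvals(M).real)) < 0.0

# Hypothetical interval matrix family A_low <= A <= A_high (entrywise).
A_low  = np.array([[-2.0,  0.0],
                   [ 0.0, -3.0]])
A_high = np.array([[-1.0,  0.5],
                   [ 0.5, -2.0]])

def all_vertices_hurwitz(A_low, A_high):
    """Check Hurwitz stability at every vertex of the interval family."""
    n = A_low.shape[0]
    for pattern in product([False, True], repeat=n * n):
        V = np.where(np.array(pattern).reshape(n, n), A_high, A_low)
        if not is_hurwitz(V):
            return False
    return True
```

For this family every vertex matrix has negative trace and positive determinant, so `all_vertices_hurwitz(A_low, A_high)` returns True.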
Abstract:
We first formulate a model for hybrid dynamical systems which covers a very large class of systems and which is suitable for the qualitative analysis of such systems. Next, we introduce the notion of an invariant set for hybrid dynamical systems and we define several types of (Lyapunov-like) stability concepts for an invariant set. We then establish sufficient conditions for uniform stability, uniform asymptotic stability, exponential stability, and instability of an invariant set of hybrid dynamical systems. Under some mild additional assumptions, we also establish necessary conditions for some of the above stability types (converse theorems). In addition to the above, we also establish sufficient conditions for the uniform boundedness of the motions of hybrid dynamical systems (Lagrange stability). To demonstrate the applicability of the developed theory, we present specific examples of hybrid dynamical systems and we conduct a stability analysis of some of these examples.
Abstract:
The authors develop a design technique for associative memories with learning and forgetting capabilities via artificial feedback neural networks. The proposed synthesis technique utilizes the eigenstructure method. Networks generated by this method are capable of learning new patterns as well as forgetting existing patterns without the necessity of recomputing the entire interconnection weights and external inputs. In many respects, the results represent significant improvements over the outer product method, the projection learning rule, and the pseudo-inverse method with stability constraints. Several specific examples are given to illustrate the strengths and weaknesses of the methodology advocated.
Abstract:
For pt.II see ibid., vol.35, no.10, p.1230-42 (1988). The authors develop an algorithm which enables them to obtain estimates of bounds for the set of all solutions of initial-value problems of linear systems of autonomous first-order ordinary differential equations that linearly depend on a parameter belonging to an interval. They demonstrate the applicability of their results by considering three specific examples: an RLC circuit, an instrument servomechanism, and the design of a minimum plant sensitivity optimal linear regulator.
Abstract:
We first present results for the analysis and synthesis of a class of neural networks without any restrictions on the interconnecting structure. The class of neural networks which we consider have the structure of analog Hopfield nets and utilize saturation functions to model the neurons. Our analysis results make it possible to locate in a systematic manner all equilibrium points of the neural network and to determine the stability properties of the equilibrium points. The synthesis procedure makes it possible to design in a systematic manner neural networks (for associative memories) which store all desired memory patterns as reachable memory vectors. We generalize the above results to develop a design procedure for neural networks with sparse coefficient matrices. Our results guarantee that the synthesized neural networks have predetermined sparse interconnection structures and store any set of desired memory patterns as reachable memory vectors. We show that a sufficient condition for the existence of a sparse neural network design is self-feedback for every neuron in the network. We apply our synthesis procedure to the design of cellular neural networks for associative memories. Our design procedure for neural networks with sparse interconnecting structure can take into account various problems encountered in VLSI realizations of such networks. For example, our procedure can be used to design neural networks with few or no line-crossings resulting from the network interconnections. Several specific examples are included to demonstrate the applicability of the methodology advanced herein.
Abstract:
The authors develop a design technique for associative memories with learning and forgetting abilities via artificial feedback neural networks. The method utilizes the theory of large-scale interconnected dynamical systems, instead of the usual energy methods. Networks synthesized by this design method are capable of learning new patterns as well as forgetting old patterns without recomputing the entire interconnection matrix. The method, in which the properties of pseudo-inverse matrices are used to iteratively solve systems of linear equations, provides significant improvements over the outer product method and the projection learning rule. Several specific examples are given to illustrate the strengths and weaknesses of the methodology.
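For context, a minimal sketch of the projection (pseudo-inverse) learning rule that this abstract compares against: the weight matrix is the orthogonal projector onto the span of the stored bipolar patterns, so every stored pattern is a fixed point of the recall map. The two four-dimensional patterns below are hypothetical, and this sketch is the baseline rule, not the authors' large-scale-systems design method.

```python
import numpy as np

# Columns of `patterns` are the stored bipolar memory vectors (hypothetical).
patterns = np.array([[ 1, -1,  1, -1],
                     [ 1,  1, -1, -1]], dtype=float).T

# Projection learning rule: W projects onto span(patterns), so W p = p
# for every stored pattern p.
W = patterns @ np.linalg.pinv(patterns)

def recall(x):
    """One synchronous update followed by sign quantization."""
    return np.sign(W @ np.asarray(x, dtype=float))
```

Both stored patterns satisfy `recall(p) == p`; the learning-and-forgetting methods in the abstract improve on this rule by updating `W` incrementally instead of recomputing the pseudo-inverse from scratch.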
Abstract:
In contrast to the usual types of neural networks which utilize two states for each neuron, a class of synchronous discrete-time neural networks with multilevel threshold neurons is developed. A qualitative analysis and a synthesis procedure for the class of neural networks considered constitute the principal contributions of this paper. The applicability of the present class of neural networks is demonstrated by means of a gray level image processing example, where each neuron can assume one of sixteen values. When compared to the usual neural networks with two state neurons, networks which are endowed with multilevel neurons will, in general, for a given application, require fewer neurons and thus fewer interconnections. This is an important consideration in VLSI implementation.
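A hedged sketch of what a multilevel threshold activation might look like: the neuron output is quantized to one of sixteen evenly spaced levels, as in the gray-level image example, instead of the usual two states. The specific staircase form below is an assumption for illustration; the paper's actual neuron model may differ.

```python
import numpy as np

LEVELS = 16  # sixteen output values, as in the gray-level image example

def multilevel_threshold(u, u_min=0.0, u_max=1.0):
    """Quantize an activation u in [u_min, u_max] to a level index 0..LEVELS-1."""
    u = np.clip(u, u_min, u_max)  # saturate outside the operating range
    return int(np.round((u - u_min) * (LEVELS - 1) / (u_max - u_min)))
```

A single such neuron replaces a group of two-state neurons, which is the source of the interconnection savings the abstract notes for VLSI implementation.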